
    On-Board Deep Learning for Payload Data Processing: Hardware Performance Comparison

    The path towards a multi-planetary species passes through the implementation of disruptive technological innovation. Artificial Intelligence and autonomy on spacecraft will be a fundamental part of this future. Hence, leveraging on-the-edge AI accelerators, such as FPGAs, GPUs, VPUs, and ASICs, will constitute an essential component of the spacecraft hardware of tomorrow. This work presents a comparative study specifically targeting on-board satellite use. The tested platforms are the Intel Myriad X, the Nvidia Jetson Nano, and a CPU (x64 architecture).
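As an illustration of the kind of measurement such a hardware comparison involves, the sketch below times single-image inference latency with warm-up runs excluded. The dense-layer workload and all names here are placeholders for a real accelerator runtime (e.g. OpenVINO on the Myriad X, TensorRT on the Jetson Nano), not the benchmarking setup used in the paper.

```python
import time
import statistics
import numpy as np

def benchmark(infer_fn, input_shape, n_warmup=5, n_runs=50):
    """Time a single-image inference function; return latency stats in ms."""
    x = np.random.rand(*input_shape).astype(np.float32)
    for _ in range(n_warmup):   # warm-up runs excluded from statistics
        infer_fn(x)
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer_fn(x)
        times.append((time.perf_counter() - t0) * 1e3)
    return {"mean_ms": statistics.mean(times),
            "p95_ms": sorted(times)[int(0.95 * n_runs) - 1]}

# Toy stand-in for an accelerated model: a single dense layer on the CPU.
w = np.random.rand(3 * 224 * 224, 10).astype(np.float32)
stats = benchmark(lambda x: x.reshape(1, -1) @ w, (3, 224, 224))
```

The same harness can wrap any device-specific inference call, which keeps warm-up handling and percentile reporting consistent across platforms.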

    An AI-Based Goal-Oriented Agent for Advanced On-Board Automation

    In the context of the fierce competition arising in the space economy, the number of satellites and constellations placed in orbit is set to increase considerably in the upcoming years. In such a dynamic environment, raising the autonomy level of the next space missions is key to maintaining a competitive edge in terms of scientific, technological, and commercial outcomes. We propose the adoption of an AI-based autonomous agent aiming to fully enable the spacecraft’s goal-oriented autonomy. The implemented cognitive architecture collects input starting from the sensing of the surrounding operating environment and defines a low-level schedule of tasks to be carried out throughout the specified horizon. Furthermore, the agent provides a planner module designed to find optimal solutions that maximize the outcome of the pursued objective goal. The autonomous loop is closed by comparing the expected outcome of these scheduled tasks against the real environment measurements. The entire algorithmic pipeline was tested in a simulated operational environment, specifically developed to replicate the inputs and resources of Earth Observation missions. The autonomous reasoning agent was evaluated against the classical, non-autonomous mission control approach, considering both the quantity and the quality of the collected observation data in addition to the number of observation opportunities exploited throughout the simulation time. The preliminary simulation results show that the adoption of our software agent dramatically enhances the effectiveness of the entire mission, increasing and optimizing in-orbit activities on the one hand, and reducing response latency to events (opportunities, failures, malfunctions, etc.) on the other.
In the presentation, we will cover the high-level algorithmic structure of the proposed goal-oriented reasoning model, as well as a brief explanation of each internal module’s contribution to the overall agent’s architecture. In addition, an overview of the parameters processed as input and the expected algorithms’ output will be provided, to contextualize the placement of the proposed solution. Finally, an Earth Observation use case will be used as the benchmark to test the performance of the proposed approach against the classical one, highlighting promising conclusions regarding our autonomous agent’s adoption.
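To make the planning step of such a sense-plan-act loop concrete, here is a minimal sketch of an on-board scheduler choosing among observation opportunities under a resource budget. It uses a greedy value-per-cost heuristic as a simplification (the paper's planner seeks optimal solutions); all names and numbers are illustrative, not taken from the mission simulation.

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    value: float    # expected science return of the observation
    energy: float   # resource cost of executing it

def plan(opportunities, energy_budget):
    """Greedy scheduler: take the best value-per-energy tasks that fit the budget.
    An autonomous agent would re-run this whenever measured outcomes diverge
    from the expected ones, closing the loop."""
    schedule = []
    for opp in sorted(opportunities, key=lambda o: o.value / o.energy, reverse=True):
        if opp.energy <= energy_budget:
            schedule.append(opp)
            energy_budget -= opp.energy
    return schedule

opps = [Opportunity("A", value=10, energy=4),
        Opportunity("B", value=6, energy=1),
        Opportunity("C", value=7, energy=5)]
chosen = [o.name for o in plan(opps, energy_budget=6)]   # ["B", "A"]
```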

    Toward Autonomous Guidance and Control: A Robust AI-Based Solution for Low-Thrust Orbit Transfers

    The focus of our initial application scenario centers on a low-thrust orbit transfer in Low-Earth Orbit (LEO). This specific use case has been chosen due to its inherent challenges, including the requirements for robustness and real-time computation. We propose an AI-based solution capable of autonomous and robust on-board G&C. The core of our approach leverages a Deep Neural Network (DNN) trained through Reinforcement Learning (RL) techniques. Our method aims to enhance a traditional guidance approach by managing environmental perturbations: it processes the on-board navigation coordinates and provides the thrust to be imposed by the propulsion subsystem. Our approach demonstrates effectiveness in maneuvers that change semi-major axis (SMA), eccentricity (ECC), and inclination (INC), operating continuously over a control horizon of several days. Robustness is tested by using physical model uncertainties, introducing disturbances in the mission coordinates, and injecting perturbations into subsystems.
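A minimal sketch of how such a policy closes the guidance loop, assuming a tiny feedforward network with random (untrained) weights: the network maps normalized orbital-element errors to a bounded thrust command. The architecture, layer sizes, and scaling are illustrative assumptions, not the trained DNN from the paper.

```python
import numpy as np

def policy(state, w1, b1, w2, b2):
    """Tiny feedforward policy: navigation state -> thrust command.
    Input: normalized errors in (SMA, ECC, INC) vs. the target orbit.
    Output: command in [-1, 1]^3, to be scaled to the thruster's max thrust."""
    h = np.tanh(state @ w1 + b1)
    return np.tanh(h @ w2 + b2)

# Random weights stand in for parameters an RL algorithm would learn.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 16)) * 0.1, np.zeros(16)
w2, b2 = rng.normal(size=(16, 3)) * 0.1, np.zeros(3)

state = np.array([0.2, -0.1, 0.05])        # element errors vs. target orbit
thrust_cmd = policy(state, w1, b1, w2, b2)  # bounded 3-component command
```

Bounding the output with `tanh` is a common way to keep a learned controller's command within actuator limits regardless of the input it receives.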

    Deep Learning Application to Surface Properties Retrieval Using TIR Measurements: A Fast Forward/Reverse Scheme to Deal with Big Data Analysis from New Satellite Generations

    In recent years, technology advancement has led to an enormous increase in the amount of satellite data. The availability of huge datasets of remote sensing measurements to be processed, together with the increasing need for near-real-time data analysis for operational uses, has fostered the development of fast, efficient retrieval algorithms. Deep learning techniques were recently applied to satellite data for retrievals of target quantities. Forward models (FM) are a fundamental part of retrieval code development and of mission design as well. Despite this, the application of deep learning techniques to radiative transfer simulations is still underexplored. The DeepLIM project, described in this work, aimed at testing the feasibility of applying deep learning techniques to the design of the retrieval chain of an upcoming satellite mission. The Land Surface Temperature Mission (LSTM) is a candidate for Sentinel 9 and has as its main target the agricultural community’s need to improve sustainable productivity. To this end, the mission will carry a thermal infrared sensor to retrieve land-surface temperature and evapotranspiration rate. The LSTM land-surface temperature retrieval chain is used as a benchmark to test deep learning performance when applied to Earth observation studies. Starting from aircraft campaign data and state-of-the-art FM simulations with the DART model, deep learning techniques are used to generate new spectral features. Their statistical behavior is compared to the original technique to assess the generation performance. Then, the high spectral resolution simulations are convolved with the LSTM spectral response functions to obtain the radiance in the LSTM spectral channels. Simulated observations are analyzed using two state-of-the-art retrieval codes and deep learning-based algorithms.
The performance of the deep learning algorithms shows promising results for both the production of simulated spectra and the retrieval of target parameters, one of the main advances being the reduction in computational costs.
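The convolution of high-resolution spectra with channel spectral response functions can be sketched as a band average, L_chan = ∫ L(λ) SRF(λ) dλ / ∫ SRF(λ) dλ. The flat toy spectrum and Gaussian SRF below are illustrative stand-ins for the DART simulations and the actual LSTM response functions.

```python
import numpy as np

def integrate(y, x):
    """Trapezoidal integration (kept explicit for clarity)."""
    dx = np.diff(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * dx))

def channel_radiance(wl, radiance, srf):
    """Band-average radiance over one channel:
    integral of L(wl)*SRF(wl) divided by the integral of SRF(wl)."""
    return integrate(radiance * srf, wl) / integrate(srf, wl)

wl = np.linspace(8.0, 12.0, 401)               # wavelength grid, micrometres
radiance = np.full_like(wl, 9.5)               # flat toy high-resolution spectrum
srf = np.exp(-0.5 * ((wl - 10.8) / 0.3) ** 2)  # Gaussian SRF centred at 10.8 um
L_chan = channel_radiance(wl, radiance, srf)   # equals 9.5 for a flat spectrum
```

For a flat spectrum the band average recovers the input radiance exactly, which makes this a convenient sanity check before applying real response functions.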